Semantic Textual Similarity for Machine Translation Evaluation
Authors
Abstract
Machine translation converts a piece of text or speech from a source language into a target language. This paper introduces a machine translation evaluation approach based on calculating the semantic textual similarity between the machine-translated sentences. The similarity score varies with the values of the parameters alpha and beta, and semantic similarity is scored on the range [0, 5]. Experiments are carried out on the SemEval 2017 datasets, with the highest accuracy obtained on the Spanish-Spanish dataset at a Pearson correlation coefficient of 0.7969.
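The abstract does not specify the similarity formula or the exact role of alpha and beta. The sketch below is only an illustration of the general idea, under the assumption that alpha and beta are weights combining two lexical similarity components (Jaccard word overlap and bag-of-words cosine are placeholders, not the paper's features), that the combined score is rescaled to the SemEval range [0, 5], and that system output is evaluated against gold scores with the Pearson correlation coefficient.

```python
# Minimal sketch of STS-based scoring with alpha/beta weights (illustrative only).
from collections import Counter
import math

from scipy.stats import pearsonr


def jaccard_sim(s1: str, s2: str) -> float:
    """Word-overlap (Jaccard) similarity in [0, 1]."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0


def cosine_sim(s1: str, s2: str) -> float:
    """Bag-of-words cosine similarity in [0, 1]."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0


def sts_score(s1: str, s2: str, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Weighted combination of the two components, mapped onto the [0, 5] scale."""
    sim = alpha * jaccard_sim(s1, s2) + beta * cosine_sim(s1, s2)
    return 5.0 * sim / (alpha + beta)


# Toy evaluation: Pearson correlation between predicted and gold similarity scores.
pairs = [
    ("a man is playing a guitar", "a person plays the guitar"),
    ("two dogs run in the park", "dogs are running in a park"),
    ("the cat sits on the mat", "stock markets fell sharply today"),
]
gold = [3.8, 4.0, 0.2]
pred = [sts_score(a, b, alpha=0.7, beta=0.3) for a, b in pairs]
print("Pearson r:", pearsonr(pred, gold)[0])
```

In this reading, alpha and beta simply control how much each similarity component contributes, which is consistent with the abstract's statement that the score varies as the two parameters change.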
Similar resources
Semantic Textual Similarity for MT evaluation
This paper describes the system used for our participation in the WMT12 Machine Translation evaluation shared task. We also present a new approach to Machine Translation evaluation based on the recently defined Semantic Textual Similarity task. This problem is addressed using a textual entailment engine based entirely on WordNet semantic features. We describe results for the Spanish-English, C...
Learning the Impact of Machine Translation Evaluation Metrics for Semantic Textual Similarity
We present work evaluating the hypothesis that automatic evaluation metrics developed for Machine Translation (MT) systems have a significant impact on predicting semantic similarity scores in the Semantic Textual Similarity (STS) task for English, in light of their usage for paraphrase identification. We show that different metrics may have different behaviors and significance along the semantic ...
FBK: Machine Translation Evaluation and Word Similarity metrics for Semantic Textual Similarity
This paper describes the participation of FBK in the Semantic Textual Similarity (STS) task organized within SemEval 2012. Our approach explores lexical, syntactic and semantic machine translation evaluation metrics combined with distributional and knowledge-based word similarity metrics. Our best model achieves 60.77% correlation with human judgements (Mean score) and ranked 20 out of 88 submit...
UoW: NLP techniques developed at the University of Wolverhampton for Semantic Similarity and Textual Entailment
This paper presents the system submitted by the University of Wolverhampton for SemEval-2014 Task 1. We propose a machine learning approach based on features extracted using Typed Dependencies, Paraphrasing, Machine Translation evaluation metrics, Quality Estimation metrics and Corpus Pattern Analysis. Our system performed satisfactorily and obtained a 0.711 Pearson correlation for the sema...
UPC-CORE: What Can Machine Translation Evaluation Metrics and Wikipedia Do for Estimating Semantic Textual Similarity?
In this paper we discuss our participation in the 2013 SemEval Semantic Textual Similarity task. Our core features include (i) a set of metrics borrowed from automatic machine translation, originally intended to evaluate automatic translations against reference translations, and (ii) an instance of explicit semantic analysis, built upon opening paragraphs of Wikipedia 2010 articles. Our similarity estimator ...